Although the Industrial Internet of Things has increased the number of permanently installed sensors on industrial equipment, coverage gaps will remain because sensors are missing or sparsely placed across the very large plants of the petrochemical industry. Modern emergency-response operations are beginning to use small unmanned aerial systems (sUAS) capable of dropping sensor robots at precise locations. These systems can provide the long-term, persistent monitoring that aerial drones alone cannot. Although such assets are relatively inexpensive, choosing which robotic sensing system to deploy to which part of an industrial process during an emergency in a complex plant environment remains challenging. This paper presents a framework for optimizing emergency sensor deployment as a preliminary step toward robot-enabled disaster response. AI techniques (long short-term memory networks, 1-D convolutional neural networks, logistic regression, and random forests) identify the regions where sensors would be most valuable without requiring humans to enter potentially dangerous areas. In the scenario described, the optimization cost function accounts for the costs of false-positive and false-negative errors. Mitigation decisions include implementing repairs or shutting down the plant. The expected value of information (EVI) is used to identify the most valuable sensor types and physical locations to deploy in order to increase the decision-analytic value of the sensor network. The approach is applied to a case study using the Tennessee Eastman process dataset of a chemical plant, and we discuss the implications for the operation, allocation, and decision-making around sensors in plant emergency and resilience scenarios.
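As a minimal sketch of the pre-posterior (EVI) reasoning this abstract alludes to, the snippet below ranks candidate sensor placements by how much an alarm/no-alarm reading would reduce expected decision cost, assuming a single binary fault per plant section, a binary sensor, and illustrative repair and missed-fault costs. The section names, sensor characteristics, and cost values are hypothetical and are not taken from the paper.

```python
import numpy as np

def expected_cost(p_fault, act, c_repair, c_missed):
    """Expected cost of a mitigation decision given the fault probability."""
    if act:                        # repair / shut down regardless of the true state
        return c_repair
    return p_fault * c_missed      # do nothing: pay only if a fault is actually present

def evi_for_sensor(p_fault, sensitivity, false_alarm, c_repair, c_missed):
    """Expected value of information for one candidate sensor placement."""
    # Best achievable cost using only the prior fault probability
    prior_cost = min(expected_cost(p_fault, a, c_repair, c_missed) for a in (True, False))

    # Pre-posterior analysis: marginal alarm probability and Bayes-updated posteriors
    p_alarm = sensitivity * p_fault + false_alarm * (1.0 - p_fault)
    post_alarm = sensitivity * p_fault / p_alarm
    post_quiet = (1.0 - sensitivity) * p_fault / (1.0 - p_alarm)

    cost_with_sensor = (
        p_alarm * min(expected_cost(post_alarm, a, c_repair, c_missed) for a in (True, False))
        + (1.0 - p_alarm) * min(expected_cost(post_quiet, a, c_repair, c_missed) for a in (True, False))
    )
    return prior_cost - cost_with_sensor

# Hypothetical candidate placements: (prior fault prob., sensitivity, false-alarm rate)
candidates = {"reactor_feed": (0.05, 0.95, 0.02), "separator": (0.02, 0.90, 0.05)}
ranked = sorted(candidates.items(),
                key=lambda kv: evi_for_sensor(*kv[1], c_repair=1e4, c_missed=5e5),
                reverse=True)
print("deploy first:", ranked[0][0])
```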
Object movement identification is one of the most researched problems in the field of computer vision. In this task, we try to classify a pixel as foreground or background. Even though numerous traditional machine learning and deep learning methods already exist for this problem, the two major issues with most of them are the need for large amounts of ground truth data and their inferior performance on unseen videos. Since every pixel of every frame has to be labeled, acquiring large amounts of data for these techniques gets rather expensive. Recently, Zhao et al. [1] proposed the Arithmetic Distribution Neural Network (ADNN), a one-of-a-kind approach for universal background subtraction that utilizes probability information from the histogram of temporal pixels and achieves promising results. Building on this work, we developed an intelligent video surveillance system that uses the ADNN architecture for motion detection, trims the video to only the parts containing motion, and performs anomaly detection on the trimmed video.
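The ADNN's input is described as probability information from histograms of temporal pixels. The sketch below shows one plausible way to build such per-pixel temporal-difference histograms with NumPy; the window length, bin count, and the difference-to-current-frame convention are assumptions for illustration, not the authors' exact preprocessing.

```python
import numpy as np

def temporal_difference_histograms(frames, num_bins=201):
    """Per-pixel histograms of temporal intensity differences.

    frames: (T, H, W) grayscale stack with values in [0, 255].
    Returns an (H, W, num_bins) array of empirical distributions -- the kind of
    probability input an ADNN-style background model consumes (a sketch of the
    idea, not the paper's exact pipeline).
    """
    T, H, W = frames.shape
    diffs = frames.astype(np.float32) - frames[-1]        # difference to the current frame
    edges = np.linspace(-255.0, 255.0, num_bins + 1)
    flat = diffs.reshape(T, -1)                           # (T, H*W)
    hists = np.stack(
        [np.histogram(flat[:, p], bins=edges)[0] for p in range(H * W)], axis=0
    ).astype(np.float32) / T
    return hists.reshape(H, W, num_bins)

# Example: a random 16-frame window over a tiny 4x4 patch
window = np.random.randint(0, 256, size=(16, 4, 4))
print(temporal_difference_histograms(window).shape)      # (4, 4, 201)
```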
We propose an approach for semantic imitation, which uses demonstrations from a source domain, e.g. human videos, to accelerate reinforcement learning (RL) in a different target domain, e.g. a robotic manipulator in a simulated kitchen. Instead of imitating low-level actions like joint velocities, our approach imitates the sequence of demonstrated semantic skills like "opening the microwave" or "turning on the stove". This allows us to transfer demonstrations across environments (e.g. real-world to simulated kitchen) and agent embodiments (e.g. bimanual human demonstration to robotic arm). We evaluate on three challenging cross-domain learning problems and match the performance of demonstration-accelerated RL approaches that require in-domain demonstrations. In a simulated kitchen environment, our approach learns long-horizon robot manipulation tasks, using less than 3 minutes of human video demonstrations from a real-world kitchen. This enables scaling robot learning via the reuse of demonstrations, e.g. collected as human videos, for learning in any number of target domains.
Semantic navigation is necessary to deploy mobile robots in uncontrolled environments like our homes, schools, and hospitals. Many learning-based approaches have been proposed in response to the lack of semantic understanding of the classical pipeline for spatial navigation, which builds a geometric map using depth sensors and plans to reach point goals. Broadly, end-to-end learning approaches reactively map sensor inputs to actions with deep neural networks, while modular learning approaches enrich the classical pipeline with learning-based semantic sensing and exploration. But learned visual navigation policies have predominantly been evaluated in simulation. How well do different classes of methods work on a robot? We present a large-scale empirical study of semantic visual navigation methods comparing representative methods from classical, modular, and end-to-end learning approaches across six homes with no prior experience, maps, or instrumentation. We find that modular learning works well in the real world, attaining a 90% success rate. In contrast, end-to-end learning does not, dropping from 77% simulation to 23% real-world success rate due to a large image domain gap between simulation and reality. For practitioners, we show that modular learning is a reliable approach to navigate to objects: modularity and abstraction in policy design enable Sim-to-Real transfer. For researchers, we identify two key issues that prevent today's simulators from being reliable evaluation benchmarks - (A) a large Sim-to-Real gap in images and (B) a disconnect between simulation and real-world error modes - and propose concrete steps forward.
We consider the problem of embodied visual navigation given an image-goal (ImageNav) where an agent is initialized in an unfamiliar environment and tasked with navigating to a location 'described' by an image. Unlike related navigation tasks, ImageNav does not have a standardized task definition which makes comparison across methods difficult. Further, existing formulations have two problematic properties; (1) image-goals are sampled from random locations which can lead to ambiguity (e.g., looking at walls), and (2) image-goals match the camera specification and embodiment of the agent; this rigidity is limiting when considering user-driven downstream applications. We present the Instance-specific ImageNav task (InstanceImageNav) to address these limitations. Specifically, the goal image is 'focused' on some particular object instance in the scene and is taken with camera parameters independent of the agent. We instantiate InstanceImageNav in the Habitat Simulator using scenes from the Habitat-Matterport3D dataset (HM3D) and release a standardized benchmark to measure community progress.
We present a retrospective on the state of Embodied AI research. Our analysis focuses on 13 challenges presented at the Embodied AI Workshop at CVPR. These challenges are grouped into three themes: (1) visual navigation, (2) rearrangement, and (3) embodied vision-and-language. We discuss the dominant datasets within each theme, evaluation metrics for the challenges, and the performance of state-of-the-art models. We highlight commonalities between top approaches to the challenges and identify potential future directions for Embodied AI research.
We present the Habitat-Matterport 3D Semantics (HM3DSEM) dataset. HM3DSEM is the largest dataset of 3D real-world spaces with densely annotated semantics that is currently available to the academic community. It consists of 142,646 object instance annotations across 216 3D spaces and 3,100 rooms within those spaces. The scale, quality, and diversity of object annotations far exceed those of prior datasets. A key difference setting apart HM3DSEM from other datasets is the use of texture information to annotate pixel-accurate object boundaries. We demonstrate the effectiveness of the HM3DSEM dataset for the Object Goal Navigation task using different methods. Policies trained using HM3DSEM outperform those trained on prior datasets. The introduction of HM3DSEM in the Habitat ObjectNav Challenge led to an increase in participation from 400 submissions in 2021 to 1022 submissions in 2022.
We propose a method for transferring motion to a still 2D image. Our approach uses deep learning to segment part of the image as the subject, then uses inpainting to complete the background, and finally animates the subject by embedding the image in a triangular mesh while keeping the rest of the image fixed.
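Below is a hedged skeleton of the segment-inpaint-animate pipeline the abstract describes. The subject mask and per-frame displacement field are assumed to be supplied by external models (they are parameters here), and `cv2.remap` stands in for the paper's triangular-mesh deformation; none of this is the authors' actual implementation.

```python
import cv2
import numpy as np

def animate_still_image(image, subject_mask, displacement_fn, num_frames=30):
    """Sketch of the segment -> inpaint -> animate pipeline.

    image: (H, W, 3) uint8; subject_mask: (H, W) uint8 (nonzero = subject),
    assumed to come from a deep segmentation model; displacement_fn(t) returns
    hypothetical (dx, dy) float32 flow maps for frame t.
    """
    # 1. Complete the background behind the subject so it can stay fixed.
    background = cv2.inpaint(image, subject_mask, 5, cv2.INPAINT_TELEA)

    h, w = subject_mask.shape
    grid_y, grid_x = np.mgrid[0:h, 0:w].astype(np.float32)
    frames = []
    for t in range(num_frames):
        dx, dy = displacement_fn(t)
        # 2. Stand-in for the triangular-mesh deformation: remap subject pixels.
        warped = cv2.remap(image, grid_x + dx, grid_y + dy, cv2.INTER_LINEAR)
        warped_mask = cv2.remap(subject_mask, grid_x + dx, grid_y + dy, cv2.INTER_NEAREST)
        # 3. Composite the moving subject over the untouched background.
        frames.append(np.where(warped_mask[..., None] > 0, warped, background))
    return frames
```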
Since the inception of deep networks, the computational resources required to train models have kept increasing, and training neural networks on large-scale datasets has become a challenging and time-consuming task. There is therefore a need to reduce datasets without compromising accuracy. In this paper, we present an early approach: a novel method for reducing dataset size via homogeneous clustering. The proposed method is based on partitioning the dataset into homogeneous clusters and selecting the images that contribute most to accuracy. We propose two variants: Geometrical Homogeneous Clustering for Image Data Reduction (GHCIDR) and Merged GHCIDR, built on the baseline algorithm Reduction through Homogeneous Clustering (RHC), to achieve better accuracy and training time. The intuition behind GHCIDR is to select data points based on cluster weights and the geometric distribution of the training set. Merged GHCIDR merges clusters with the same label using complete-linkage clustering. We used three deep-learning models - a fully connected network (FCN), VGG1, and VGG16 - and evaluated the two variants on four datasets: MNIST, CIFAR10, Fashion-MNIST, and Tiny-ImageNet. At the same reduction percentage as RHC, Merged GHCIDR increased accuracy by 2.8%, 8.9%, 7.6%, and 3.5% on MNIST, Fashion-MNIST, CIFAR10, and Tiny-ImageNet, respectively.
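A rough sketch of the homogeneous-clustering idea follows: cluster each class separately (so every cluster is label-homogeneous), then keep a subset from each cluster that scales with the cluster's size and is spread geometrically across it. The specific weighting and spread rule here are illustrative assumptions, not the exact GHCIDR or Merged GHCIDR criteria from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def reduce_by_homogeneous_clustering(features, labels, clusters_per_class=50, keep_fraction=0.3):
    """Return indices of a reduced training set (RHC/GHCIDR-style sketch).

    features: (N, D) array of flattened images; labels: (N,) class labels.
    """
    keep = []
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        km = KMeans(n_clusters=min(clusters_per_class, len(idx)), n_init=10).fit(features[idx])
        for k in range(km.n_clusters):
            members = idx[km.labels_ == k]
            budget = max(1, int(round(keep_fraction * len(members))))   # weight by cluster size
            dists = np.linalg.norm(features[members] - km.cluster_centers_[k], axis=1)
            order = np.argsort(dists)
            # Spread the picks across the cluster's radius instead of only near the centre.
            picks = order[np.linspace(0, len(order) - 1, budget).astype(int)]
            keep.extend(members[picks].tolist())
    return np.array(sorted(set(keep)))
```

The reduced index set can then be used to subsample the training data before fitting an FCN or VGG-style model, which is where the reported accuracy/training-time trade-off would be measured.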
Removing the influence of a specified subset of the training data from a machine learning model may be required to address issues such as privacy, fairness, and data quality. Retraining the model from scratch on the data remaining after the subset is removed is effective but often infeasible because of its computational expense. Over the past few years, several new approaches for efficient removal have therefore emerged, forming the field of "machine unlearning"; however, many aspects of the literature published so far are disparate and lack consensus. In this paper, we summarize and compare seven state-of-the-art machine unlearning algorithms, consolidate definitions of the core concepts used in the field, reconcile different approaches to evaluating the algorithms, and discuss issues related to applying machine unlearning in practice.